# nut13 specify keyset ID integer size: 32 bits #189
base: main
## Conversation
```diff
@@ -42,7 +42,7 @@ The wallet starts with `counter_k := 0` upon encountering a new keyset and incre

 #### Keyset ID

-The integer representation `keyset_id_int` of a keyset is calculated from its [hexadecimal ID][02] which has a length of 8 bytes or 16 hex characters. First, we convert the hex string to a big-endian sequence of bytes. This value is then modulo reduced by `2^31 - 1` to arrive at an integer that is a unique identifier `keyset_id_int`.
+The 32 bit integer representation `keyset_id_int` of a keyset is calculated from its [hexadecimal ID][02] which has a length of 8 bytes or 16 hex characters. First, we convert the hex string to a big-endian sequence of bytes. This value is then modulo reduced by `2^31 - 1` to arrive at an integer that is a unique identifier `keyset_id_int`.
```
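For reference, a minimal Python sketch of the computation the changed paragraph describes. The helper name `keyset_id_to_int` and the example keyset ID are illustrative, not taken from the diff above:

```python
def keyset_id_to_int(keyset_id: str) -> int:
    """Map a 16-hex-character (8-byte) keyset ID to keyset_id_int.

    The hex string is decoded to bytes, interpreted as a big-endian
    integer, and reduced modulo 2^31 - 1. The result lies in the range
    [0, 2^31 - 2], so it always fits in 32 bits (even a signed int32).
    """
    keyset_id_bytes = bytes.fromhex(keyset_id)              # 8 bytes
    big_endian_int = int.from_bytes(keyset_id_bytes, "big")
    return big_endian_int % (2**31 - 1)

# Illustrative keyset ID:
print(keyset_id_to_int("009a1f293253e41e"))  # 864559728
```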
I am noticing something else here that is not per se an issue, but it's awkward: why reduce by `2^31 - 1` instead of `2^31`? But I think I know why: `q % 2^31 == q & (2^31 - 1)`. Normally, when reducing modulo a power of 2, you can skip the division and just use a mask to get the desired bits. Whoever wrote this first must have confused the two or made a typo.

Now we can't change this back without breaking the protocol, but I thought it was funny.
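To make the observation concrete, a small Python sketch (the value of `q` is just an example): reducing modulo `2^31` is the same as masking with `2^31 - 1`, while reducing modulo `2^31 - 1` requires an actual division and generally yields a different result.

```python
q = 0x009A1F293253E41E  # an example 8-byte keyset ID as an integer

# Reducing modulo a power of two is just a bit mask:
assert q % 2**31 == q & (2**31 - 1)

# Reducing modulo 2^31 - 1 is a real division and differs in general:
print(q % (2**31 - 1))  # 864559728
print(q & (2**31 - 1))  # 844358686 (the low 31 bits of q)
```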
NACK. This is implementation-specific (some languages do not have a 32-bit int) and the intention is obvious from the provided examples.
I disagree that it's obvious. I opened this PR because I found the language confusing: it seems to assume the term "integer" means a 32-bit number. The Python example is clear to me, but the JavaScript example uses `BigInt()` twice to arrive at a number that fits into a regular int. I think the language does a good job explaining that the input is 8 bytes or 16 hex characters, but it does not explicitly state the size of the output. The size of the container for the function output is an implementation-specific detail, but it would be helpful to explain that the output of this function fits into 32 bits, i.e. 4 bytes or 8 hex characters. @prusnak, would this language be better?
I like the idea of specifying that the keyset ID should be able to fit into a 32-bit integer. Maybe the spec should just constrain the range of valid values?
The value is not stored anywhere; it is just used as an input to the BIP32 child key derivation (CKD) function. Therefore I find it irrelevant whether the value is stored in a 32-bit int, a 64-bit int, 4 bytes, etc. All that matters is the data type that CKD is expecting (and yes, in statically typed languages it is usually uint32, but it can be literally any int type).
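For context, a sketch of why the bound matters, under the assumption that `keyset_id_int` is used as a hardened BIP32 child index (the helper below is hypothetical): BIP32 indices are unsigned 32-bit values, and a hardened child uses `index + 2^31`, so the raw index must fit in 31 bits. The modulo `2^31 - 1` reduction guarantees exactly that.

```python
HARDENED_OFFSET = 2**31  # BIP32 marks hardened children as index + 2^31

def hardened_index(keyset_id_int: int) -> int:
    """Form the hardened BIP32 child index keyset_id_int' (hypothetical helper).

    CKD expects an unsigned 32-bit index. Because keyset_id_int was
    reduced modulo 2^31 - 1, it lies in [0, 2^31 - 2], so the hardened
    index keyset_id_int + 2^31 still fits in a uint32.
    """
    assert 0 <= keyset_id_int < HARDENED_OFFSET
    return keyset_id_int + HARDENED_OFFSET

print(hardened_index(864559728))  # still within the uint32 range
```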
Ok, sure. My goal in suggesting this change is to make it clearer to devs implementing this spec.
Is the intention of this integer representation of the keyset ID to fit into 32 bits? I ran into this implementation in cdk that produces a u64 output, which seems wrong to me.
https://github.com/cashubtc/cdk/blob/main/crates/cdk/src/nuts/nut02.rs#L117
I am opening this PR to get some clarity.